Time-series anomaly detection is an important task that has been widely applied in industry. Since manual data annotation is expensive and inefficient, most applications adopt unsupervised anomaly detection methods, but the results are usually sub-optimal and unsatisfactory to end customers. Weak supervision is a promising paradigm for obtaining a considerable number of labels at low cost: it lets customers label data by writing heuristic rules rather than annotating each instance individually. However, in the time-series domain it is hard for people to write reasonable labeling functions, as time-series data is numerically continuous and difficult to interpret. In this paper, we propose a Label-Efficient Interactive Time-Series Anomaly Detection (LEIAD) system, which enables a user to improve the results of unsupervised anomaly detection with only a small number of interactions with the system. To achieve this goal, the system integrates weak supervision and active learning collaboratively while generating labeling functions automatically from only a few labeled data points. These techniques are complementary and reinforce each other. We conduct experiments on three time-series anomaly detection datasets, demonstrating that the proposed system is superior to existing solutions in both the weak supervision and active learning areas. The system has also been tested in a real industrial scenario to show its practicality.
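As an illustration of the heuristic labeling rules this abstract refers to, here is a minimal sketch of a weak-supervision labeling function for time series; the rolling-window z-score rule, its parameters, and the function name are assumptions for illustration, not taken from the paper:

```python
import numpy as np

def zscore_labeling_function(series, window=24, threshold=3.0):
    """Weak labeling rule (illustrative, not LEIAD's actual rule):
    flag a point as anomalous (1) when it deviates from the rolling
    mean of the preceding window by more than `threshold` standard
    deviations; otherwise label it normal (0). Points without a
    full history window are labeled 0."""
    labels = np.zeros(len(series), dtype=int)
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            labels[i] = 1
    return labels
```

A user can cheaply write several such rules with different windows and thresholds; a weak-supervision system then aggregates their noisy votes instead of requiring per-point annotation.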
The number of international benchmarking competitions is steadily increasing in various fields of machine learning (ML) research and practice. So far, however, little is known about the common practices and the bottlenecks faced by the community in tackling the research questions posed. To shed light on the status quo of algorithm development in the specific field of biomedical image analysis, we designed an international survey that was issued to all participants of challenges conducted in conjunction with the IEEE ISBI 2021 and MICCAI 2021 conferences (80 competitions in total). The survey covered participants' expertise and working environments, their chosen strategies, and algorithm characteristics. A median of 72% of challenge participants took part in the survey. According to our results, knowledge exchange was the primary incentive (70%) for participation, while the reception of prize money played only a minor role (16%). While a median of 80 working hours was spent on method development, a large portion of participants (32%) stated that they did not have enough time for method development, and 25% perceived the infrastructure to be a bottleneck. Overall, 94% of all solutions were deep learning-based; of these, 84% were based on standard architectures. 43% of the respondents reported that the data samples (e.g., images) were too large to be processed at once. This was most commonly addressed by patch-based training (69%), downsampling (37%), and solving 3D analysis tasks as a series of 2D tasks. K-fold cross-validation on the training set was performed by only 37% of the participants, and only 50% of the participants performed ensembling, based on multiple identical models (61%) or heterogeneous models (39%). 48% of the respondents applied postprocessing steps.
Long-form numerical reasoning in financial analysis aims to generate a reasoning program that calculates the correct answer for a given question. Previous work followed a retriever-generator framework, where the retriever selects key facts from a long-form document and the generator produces a reasoning program based on the retrieved facts. However, this work treated all facts equally, without considering the different contributions of facts with and without numbers. Meanwhile, program consistency was ignored under supervised training, resulting in lower training accuracy and diversity. To solve these problems, we propose APOLLO to improve the long-form numerical reasoning framework. For the retriever, we adopt a number-aware negative sampling strategy that makes the retriever more discriminative on key numerical facts. For the generator, we design consistency-based reinforcement learning and a target program augmentation strategy based on the consistency of program execution results. Experimental results on the FinQA and ConvFinQA leaderboards verify the effectiveness of our proposed method, achieving a new state of the art.
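To make the idea of number-aware negative sampling concrete, here is a minimal sketch: negatives for retriever training are drawn with higher weight from candidate facts that contain numbers, so the retriever sees harder numerical distractors. The function name, weighting scheme, and parameters are assumptions for illustration, not the paper's exact procedure:

```python
import random
import re

def sample_negatives(candidates, gold_ids, k, number_weight=3.0, seed=0):
    """Sample k negative facts for retriever training, upweighting
    facts that contain digits. A sketch of number-aware negative
    sampling, not APOLLO's actual implementation."""
    rng = random.Random(seed)
    # Exclude gold (positive) facts from the negative pool.
    pool = [(i, f) for i, f in enumerate(candidates) if i not in gold_ids]
    weights = [number_weight if re.search(r"\d", f) else 1.0 for _, f in pool]
    chosen = []
    for _ in range(min(k, len(pool))):
        # Weighted draw without replacement.
        idx = rng.choices(range(len(pool)), weights=weights, k=1)[0]
        chosen.append(pool.pop(idx)[0])
        weights.pop(idx)
    return chosen
```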
In this paper, we propose Stochastic Knowledge Distillation (SKD) to obtain a compact BERT-style language model dubbed SKDBERT. In each iteration, SKD samples a teacher model from a pre-defined teacher ensemble, which consists of multiple teacher models with multi-level capacities, to transfer knowledge into the student model in a one-to-one manner. The sampling distribution plays an important role in SKD, and we heuristically present three types of sampling distributions to assign appropriate probabilities to the multi-level teacher models. SKD has two advantages: 1) it preserves the diversity of the multi-level teacher models by stochastically sampling a single teacher model in each iteration, and 2) it improves the efficacy of knowledge distillation via multi-level teacher models when a large capacity gap exists between the teacher and the student. Experimental results on the GLUE benchmark show that SKDBERT reduces the size of a BERT$_{\rm BASE}$ model by 40% while retaining 99.5% of its language-understanding performance and being 100% faster.
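The core loop the abstract describes, sampling one teacher per iteration and distilling from it alone, can be sketched as follows; the temperature, function names, and KL-based distillation loss are conventional choices assumed for illustration, not details confirmed by the abstract:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over a logit vector."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def skd_step(student_logits, teacher_logits_list, probs, T=2.0, rng=None):
    """One SKD-style iteration (sketch): sample a single teacher from
    the ensemble according to `probs`, then compute the KL distillation
    loss between the sampled teacher's and the student's softened
    output distributions."""
    rng = rng or np.random.default_rng()
    t = rng.choice(len(teacher_logits_list), p=probs)
    p_t = softmax(teacher_logits_list[t], T)
    p_s = softmax(student_logits, T)
    kl = float(np.sum(p_t * (np.log(p_t) - np.log(p_s))))
    return t, kl
```

Over many iterations the student sees every teacher in proportion to `probs`, which is how the three heuristic sampling distributions the abstract mentions would enter.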
The processing and recognition of geoscience images have wide applications. Most existing research focuses on understanding high-quality geoscience images under the assumption that all images are clear. However, in many real-world cases, geoscience images may contain occlusions introduced during image acquisition. This problem is essentially the image inpainting problem studied in computer vision and multimedia. To the best of our knowledge, all existing image inpainting algorithms learn to repair the occluded regions for better visual quality; they are excellent for natural images but not good enough for geoscience images, as they ignore the geoscience-related tasks. This paper aims to repair the occluded regions for better geoscience task performance and advanced visual quality simultaneously, without changing the currently deployed deep learning-based geoscience models. Because of the complex context of geoscience images, we propose a coarse-to-fine encoder-decoder network with coarse-to-fine adversarial context discriminators to reconstruct the occluded image regions. Due to the limited amount of geoscience image data, we use a MaskMix-based data augmentation method to exploit more information from the limited data. Experimental results on three public geoscience datasets, covering remote sensing scene recognition, cross-view geolocation, and semantic segmentation tasks respectively, show the effectiveness and accuracy of the proposed method.
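The abstract does not spell out how MaskMix works, but mask-mixing augmentations of this family typically paste a randomly placed region from one image into another and record the occlusion mask. A generic sketch under that assumption (region shape, fraction bounds, and function name are all illustrative, not the paper's specification):

```python
import numpy as np

def maskmix(img_a, img_b, rng=None, min_frac=0.2, max_frac=0.5):
    """MaskMix-style augmentation (generic sketch): paste a random
    rectangular region of img_b into img_a and return the mixed image
    together with the binary occlusion mask. Images are HxWxC arrays
    of the same shape."""
    rng = rng or np.random.default_rng()
    h, w = img_a.shape[:2]
    rh = int(h * rng.uniform(min_frac, max_frac))
    rw = int(w * rng.uniform(min_frac, max_frac))
    y = rng.integers(0, h - rh + 1)
    x = rng.integers(0, w - rw + 1)
    mixed = img_a.copy()
    mixed[y:y + rh, x:x + rw] = img_b[y:y + rh, x:x + rw]
    mask = np.zeros((h, w), dtype=np.uint8)
    mask[y:y + rh, x:x + rw] = 1
    return mixed, mask
```

The returned mask doubles as a training target for the inpainting network, since it marks exactly which pixels were synthetically occluded.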
In dense image segmentation tasks (e.g., semantic, panoptic), existing methods can hardly generalize well to unseen image domains, predefined classes, or variations in image resolution and quality. Motivated by these observations, we construct a large-scale entity segmentation dataset to explore fine-grained entity segmentation, with a strong focus on open-world and high-quality dense segmentation. The dataset contains images spanning diverse domains and resolutions, along with high-quality mask annotations for training and testing. Given the high quality and high resolution of the dataset, we propose CropFormer for high-quality segmentation, which improves mask prediction by using high-resolution image crops that provide more fine-grained image details than the full image. CropFormer is the first query-based Transformer architecture that can effectively ensemble mask predictions from multiple image crops, by learning queries that associate the same entities across the full image and its crops. With CropFormer, we achieve a significant AP gain of $1.9$ on the challenging fine-grained entity segmentation task. The dataset and code will be released at http://luqi.info/entityv2.github.io/.
The boom of deep learning has contributed to rapid progress in scene text detection. Among all convolutional-network-based methods, segmentation-based ones have attracted wide attention for their superiority in detecting text instances of arbitrary shapes and extreme aspect ratios. However, these bottom-up methods are limited by the performance of their segmentation models. In this paper, we propose DPTNet (Dual-Path Transformer Network), a simple yet effective architecture that models both global and local information for the scene text detection task. We further propose a parallel design that combines a convolutional network with a powerful self-attention mechanism, providing complementary cues between the attention path and the convolutional path. Moreover, a bidirectional interaction module across the two paths is developed to provide complementary cues in both the channel and spatial dimensions. We also upgrade the concentration operation by adding an extra multi-head attention layer to it. Our DPTNet achieves state-of-the-art results on the MSRA-TD500 dataset and delivers competitive results on other standard benchmarks in terms of both detection accuracy and speed.
Pre-training trajectory embeddings is a fundamental and critical procedure in spatial trajectory mining, and it benefits a variety of downstream tasks. The key to generating effective trajectory embeddings is to extract high-level travel semantics from trajectories, including movement patterns and travel purposes, while accounting for the long-term spatio-temporal correlations within trajectories. Despite existing efforts, pre-training trajectory embeddings still faces significant challenges. First, commonly used generative pretext tasks are not suitable for extracting high-level semantics from trajectories. Second, existing data augmentation methods are not well suited to trajectory datasets. Third, current encoder designs fail to fully incorporate the long-term spatio-temporal correlations hidden in trajectories. To address these challenges, we propose a novel Contrastive Spatio-Temporal Trajectory Embedding (CSTTE) model for learning comprehensive trajectory embeddings. CSTTE adopts a contrastive learning framework so that its pretext task is robust to noise. A specially designed trajectory data augmentation method is combined with the contrastive pretext task to preserve high-level travel semantics. We also build an efficient spatio-temporal trajectory encoder to model the long-term spatio-temporal correlations in trajectories efficiently and comprehensively. Extensive experiments on two downstream tasks and three real-world datasets demonstrate the superiority of our model over existing trajectory embedding methods.
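The contrastive pretext task the abstract describes is typically an InfoNCE-style objective: each trajectory embedding should score high against its own augmented view and low against the other trajectories in the batch. A minimal numpy sketch under that standard formulation (the temperature value and function name are assumptions, not CSTTE's exact loss):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Contrastive (InfoNCE-style) pretext loss over a batch of
    embeddings (sketch): the i-th row of `positives` is the augmented
    view of the i-th row of `anchors`; all other rows in the batch act
    as negatives. Rows are L2-normalized so logits are cosine
    similarities."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy against the matched (diagonal) positives.
    return float(-np.mean(np.diag(log_prob)))
```

The loss is small when each anchor is closest to its own augmented view, which is what pushes the encoder to preserve high-level travel semantics rather than reconstruct raw points.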
We present PanGu-Coder, a pretrained decoder-only language model that adopts the PanGu-Alpha architecture for text-to-code generation, i.e., the synthesis of programming-language solutions given natural-language problem descriptions. We train PanGu-Coder with a two-stage strategy: the first stage employs causal language modeling (CLM) to pre-train on raw programming-language data, while the second stage uses a combined objective of causal language modeling and masked language modeling (MLM), focusing on the downstream task of text-to-code generation and training on loosely curated pairs of natural-language problem definitions and code functions. Finally, we discuss PanGu-Coder-FT, which is fine-tuned on a combination of competitive-programming problems and code with continuous-integration tests. We evaluate PanGu-Coder with a focus on whether it generates functionally correct programs, and demonstrate that it achieves equivalent or better performance than similarly sized models such as Codex, while attending over a smaller context window and training on less data.
Natural language processing (NLP) inference is seeing increasing adoption in mobile applications, where on-device inference is essential for preserving user data privacy and avoiding network round trips. However, the unprecedented size of NLP models stresses both latency and memory, the two key resources of a mobile device. To meet a target latency, holding the whole model in memory launches execution as soon as possible, but it inflates an app's memory footprint several-fold, limiting the benefit to only a few inferences before the model is reclaimed by mobile memory management. On the other hand, loading the model from storage on demand incurs several seconds of IO, far exceeding the latency range that satisfies users; and because of the large skew between IO and compute latencies, pipelining layer-wise model loading and execution does not hide the IO either. To this end, we propose Speedy Transformer Inference (STI). Built on the key idea of maximizing IO/compute resource utilization on the model's most important parts, STI reconciles the latency/memory tension with two novel techniques. First, model sharding: STI treats model parameters as independently tunable shards and profiles their importance to accuracy. Second, elastic pipeline planning with a preload buffer: STI instantiates an IO/compute pipeline and uses a small buffer of preloaded shards to bootstrap execution without stalling in early stages; it judiciously selects, tunes, and assembles shards according to their importance for resource-elastic execution, maximizing inference accuracy. We prototype STI on two commodity SoCs and evaluate it across a wide range of NLP tasks, under practical target latencies, and on both CPU and GPU. We demonstrate that STI delivers high accuracy at a much lower memory cost, outperforming competitive baselines.
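The shard-selection step can be pictured as a budgeted selection problem: keep the shards whose profiled importance is highest per byte until the memory budget is filled. A minimal greedy sketch, standing in for STI's planner, whose actual algorithm the abstract does not fully specify (names and the density heuristic are assumptions):

```python
def plan_shards(shards, budget):
    """Resource-elastic shard selection (illustrative sketch): given
    (name, importance, size) tuples and a memory budget, greedily keep
    the shards with the highest importance-per-size density until the
    budget is exhausted. Returns the chosen shard names and the total
    memory used."""
    ranked = sorted(shards, key=lambda s: s[1] / s[2], reverse=True)
    chosen, used = [], 0
    for name, importance, size in ranked:
        if used + size <= budget:
            chosen.append(name)
            used += size
    return chosen, used
```

In the real system, shards would additionally be "tuned" (e.g., stored at different bit-widths), making size itself a knob; the greedy density rule above only conveys the importance-driven flavor of the decision.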